Multi-scale and U-shaped networks are widely used in various image restoration problems, including deblurring. Keeping in mind this wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block called NFResblock, which consists of a Fast Fourier Transform layer and a series of modified Nonlinear Activation Free Blocks. Based on these architectures and additions, we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net architectures, respectively. We also use three different loss functions to train these architectures: Charbonnier loss, Edge loss, and Frequency Reconstruction loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.
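To make the NFResblock idea concrete, here is a minimal PyTorch sketch combining a frequency-domain branch with simplified activation-free blocks, plus the Charbonnier loss mentioned above. The channel counts, the gating used inside the blocks, and the way the FFT branch is merged are assumptions; the abstract does not specify these details.

```python
import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    """Splits channels in half and multiplies the halves (the activation-free gating idea)."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class NAFBlockLite(nn.Module):
    """A reduced activation-free block: conv -> gate -> conv with a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels * 2, kernel_size=3, padding=1),
            SimpleGate(),                              # halves channels back to `channels`
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.body(x)

class NFResblock(nn.Module):
    """Hypothetical NFResblock: an FFT branch alongside a stack of activation-free blocks."""
    def __init__(self, channels, num_blocks=2):
        super().__init__()
        self.blocks = nn.Sequential(*[NAFBlockLite(channels) for _ in range(num_blocks)])
        # Pointwise conv applied to the stacked real/imaginary parts of the spectrum.
        self.freq_conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)

    def forward(self, x):
        spatial = self.blocks(x)
        # Frequency branch: rFFT -> pointwise conv on real/imag -> inverse rFFT.
        freq = torch.fft.rfft2(x, norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)
        freq = self.freq_conv(freq)
        real, imag = freq.chunk(2, dim=1)
        freq = torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")
        return x + spatial + freq

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier (smooth L1) loss, one of the three training losses named in the abstract."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

# Quick shape check
x = torch.randn(1, 32, 64, 64)
print(NFResblock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```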
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are complex poses, differing viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper we propose a novel Sparse Weighted Temporal Attention (SWTA) module that utilizes sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA comprises two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a base network, comprising a convolutional neural network (CNN) module along with fully connected layers, that performs activity recognition. The SWTA network can be used as a plug-in module to existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves accuracies of 72.76%, 92.56%, and 78.86% on the respective datasets, surpassing the previous state-of-the-art by margins of 25.26%, 18.56%, and 2.94%, respectively.
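A minimal sketch of the two SWTA ingredients follows: temporal-segment-style sparse frame sampling and a flow-derived spatial attention map applied to the RGB frames. The uniform segment boundaries, the motion-magnitude attention, and the residual weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def sparse_segment_sample(num_frames, num_segments):
    """Temporal-segment-style sparse sampling: one random frame index per equal segment."""
    edges = torch.linspace(0, num_frames, num_segments + 1).long().tolist()
    return torch.tensor([torch.randint(lo, max(lo + 1, hi), (1,)).item()
                         for lo, hi in zip(edges[:-1], edges[1:])])

def weighted_temporal_attention(rgb, flow):
    """Weight RGB frames by an attention map derived from optical-flow magnitude.

    rgb:  (T, 3, H, W) sparsely sampled RGB frames
    flow: (T, 2, H, W) optical flow for the same frames
    """
    motion = flow.norm(dim=1, keepdim=True)                           # (T, 1, H, W) motion magnitude
    attn = torch.softmax(motion.flatten(2), dim=-1).view_as(motion)   # spatial attention per frame
    return rgb * attn + rgb                                           # residual attention weighting

# Example: 8 segments sampled from a 120-frame clip (random tensors stand in for real frames)
idx = sparse_segment_sample(120, 8)
rgb = torch.randn(8, 3, 224, 224)
flow = torch.randn(8, 2, 224, 224)
fused = weighted_temporal_attention(rgb, flow)
print(idx.shape, fused.shape)  # torch.Size([8]) torch.Size([8, 3, 224, 224])
```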
Drone-camera-based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are complex poses, differing viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper we propose a novel Sparse Weighted Temporal Fusion (SWTF) module that utilizes sparsely sampled video frames to obtain a globally weighted temporal fusion outcome. The proposed SWTF is divided into two components: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal fusion, which fuses feature maps derived from optical flow with raw RGB images. This is followed by a base network, comprising a convolutional neural network module along with fully connected layers, that performs activity recognition. The SWTF network can be used as a plug-in module to existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves accuracies of 72.76%, 92.56%, and 78.86% on the respective datasets, surpassing the previous state-of-the-art by a significant margin.
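Where SWTA (above) weights raw frames with flow-derived attention, SWTF fuses feature maps. The following sketch shows one way such a weighted fusion of per-segment RGB and flow features could look; the learned scalar segment weights and the single flow gate are assumptions, as the abstract does not describe the weighting scheme.

```python
import torch
import torch.nn as nn

class WeightedTemporalFusion(nn.Module):
    """Fuse per-segment RGB and flow feature maps with learned weights (illustrative sketch)."""
    def __init__(self, num_segments):
        super().__init__()
        self.segment_weights = nn.Parameter(torch.ones(num_segments))  # one weight per segment
        self.flow_gate = nn.Parameter(torch.tensor(0.5))                # how much flow to mix in

    def forward(self, rgb_feats, flow_feats):
        # rgb_feats, flow_feats: (T, C, H, W) features from sparsely sampled segments
        w = torch.softmax(self.segment_weights, dim=0).view(-1, 1, 1, 1)
        fused = rgb_feats + self.flow_gate * flow_feats   # stream fusion
        return (w * fused).sum(dim=0)                     # weighted temporal pooling -> (C, H, W)

fusion = WeightedTemporalFusion(num_segments=8)
out = fusion(torch.randn(8, 256, 14, 14), torch.randn(8, 256, 14, 14))
print(out.shape)  # torch.Size([256, 14, 14])
```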
Deep learning has achieved notable success in 3D object detection with the advent of large-scale point cloud datasets. However, severe performance degradation on previously trained classes, i.e., catastrophic forgetting, remains a critical issue for real-world deployment when the number of classes is unknown or may vary. Moreover, existing 3D class-incremental detection methods are developed for the single-domain scenario and fail when encountering domain shift caused by different datasets, varying environments, etc. In this paper, we identify an unexplored yet valuable scenario, i.e., class-incremental learning under domain shift, and propose a novel 3D domain-adaptive class-incremental object detection framework, DA-CIL, in which we design a novel dual-domain copy-paste augmentation method that constructs multiple augmented domains to diversify training distributions, thereby facilitating gradual domain adaptation. Multi-level consistency is then explored to facilitate dual-teacher knowledge distillation from different domains for domain-adaptive class-incremental learning. Extensive experiments on various datasets demonstrate the effectiveness of the proposed method over baselines in the domain-adaptive class-incremental learning scenario.
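The core of the dual-domain augmentation is copy-pasting labelled objects across domains at the point-cloud level. Below is a toy NumPy sketch under simplifying assumptions (axis-aligned boxes, random planar offsets, no collision checks); the function name and parameters are hypothetical and not from the paper.

```python
import numpy as np

def copy_paste_pointcloud(target_scene, source_scene, source_boxes, max_objects=3, rng=None):
    """Toy dual-domain copy-paste: lift object points (given their 3D boxes) out of a
    source-domain scene and paste them into a target-domain scene.

    target_scene: (N, 3) points of the scene being augmented
    source_scene: (M, 3) points of a scene from the other domain
    source_boxes: list of (center(3), extent(3)) axis-aligned boxes around labelled objects
    """
    rng = rng or np.random.default_rng()
    pasted = [target_scene]
    for center, extent in source_boxes[:max_objects]:
        center, extent = np.asarray(center), np.asarray(extent)
        mask = np.all(np.abs(source_scene - center) <= extent / 2, axis=1)  # points inside the box
        obj = source_scene[mask]
        if len(obj) == 0:
            continue
        shift = rng.uniform(-5.0, 5.0, size=3)   # drop the object at a random offset
        shift[2] = 0.0                            # keep it on the ground plane
        pasted.append(obj + shift)                # translate the copied object points
    return np.concatenate(pasted, axis=0)

# Example with synthetic scenes from two "domains"
scene_a = np.random.rand(1000, 3) * 50
scene_b = np.random.rand(1000, 3) * 50
boxes = [(scene_b[0], np.array([2.0, 2.0, 1.5]))]
print(copy_paste_pointcloud(scene_a, scene_b, boxes).shape)
```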
Instance segmentation of nuclei and glands in histology images is an important step in the computational pathology workflow for cancer diagnosis, treatment planning, and survival analysis. With the advent of modern hardware, the recent availability of large-scale, high-quality public datasets, and community-organized grand challenges, there has been a surge of automated methods focusing on domain-specific challenges, which are crucial for technical advancement and clinical translation. In this survey, 126 papers on nucleus and gland instance segmentation published over the past five years (2017-2022) are analyzed in depth, and the limitations of current approaches and open challenges are discussed. In addition, potential future research directions are proposed and the contributions of state-of-the-art methods are summarized. Furthermore, a generalized summary of publicly available datasets is provided, along with detailed insights into the grand challenges, highlighting the best-performing methods for each challenge. Moreover, we aim to give the reader an understanding of the current state of existing research and pointers toward future directions for developing methods suitable for clinical practice, which can improve the diagnosis, grading, prognosis, and treatment planning of cancer. To the best of our knowledge, no previous work has reviewed instance segmentation in histology images from this perspective.
Deep neural networks (DNNs) have proven successful in a variety of applications, such as speech recognition and synthesis, computer vision, machine translation, and game playing, to name a few. However, existing deep neural network models are computationally expensive and memory-intensive, hindering their deployment in applications with low memory resources or strict latency requirements. A natural idea, therefore, is to perform model compression and acceleration in deep networks without significantly degrading model performance, which is what we call complexity reduction. In the following work, we attempt to reduce the complexity of state-of-the-art LSTM models for natural language tasks by distilling their knowledge into CNN-based models, thereby reducing inference time (or latency) at test time.
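As a reference point, the standard soft-target distillation objective that such LSTM-to-CNN transfer typically builds on is sketched below; the temperature and blending weight are generic defaults, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Soft-target knowledge distillation: KL divergence between temperature-softened
    teacher and student distributions, blended with cross-entropy on the hard labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Example: a CNN student mimicking an LSTM teacher on a 5-class text task
student_logits = torch.randn(16, 5, requires_grad=True)
teacher_logits = torch.randn(16, 5)
labels = torch.randint(0, 5, (16,))
print(distillation_loss(student_logits, teacher_logits, labels))
```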
With the wide availability of large pre-trained language models such as GPT-2 and BERT, a recent trend is to fine-tune a pre-trained model to achieve state-of-the-art performance on downstream tasks. One natural example is the "Smart Reply" application, where a pre-trained model is tuned to provide suggested replies for a given query message. Since these models are often tuned using sensitive data such as emails or chat transcripts, it is important to understand and mitigate the risk that the model leaks its tuning data. We investigate potential information-leakage vulnerabilities in a typical Smart Reply pipeline and introduce a new type of active extraction attack that exploits canonical patterns in text containing sensitive data. We show experimentally that an adversary can extract sensitive user information present in the training data. We explore potential mitigation strategies and demonstrate empirically how differential privacy can serve as an effective defense mechanism against such pattern extraction attacks.
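The differential-privacy defense evaluated in such settings is usually DP-SGD style training: per-example gradient clipping plus calibrated Gaussian noise. Here is a simplified manual sketch of one such training step (looping over examples for clarity rather than efficiency); the function and its parameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One manual DP-SGD step: clip each example's gradient, sum, add Gaussian noise, update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)  # per-example clipping
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    batch_size = len(batch_x)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm     # calibrated Gaussian noise
        p.grad = (s + noise) / batch_size
    optimizer.step()

# Example on a toy classifier
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
dp_sgd_step(model, F.cross_entropy, x, y, opt)
```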
This study aims to develop a semi-automatically labeled prosody database for Hindi to enhance the intonation component in ASR and TTS systems, which also helps in building speech-to-speech machine translation systems. Although there is no single standard for prosody labeling in Hindi, past researchers have used perceptual and statistical methods in the literature to draw inferences about the behavior of prosodic patterns in Hindi. Building on such existing studies and largely agreed-upon theories of intonation in Hindi, this study first develops a manually annotated prosodic corpus of Hindi speech data, which is then used to train prediction models for generating automatic prosodic labels. A total of 5,000 sentences (23,500 words) of declarative and interrogative types have been labeled. The trained models achieve accuracies of 73.40%, 93.20%, and 43% for pitch accents, intermediate phrase boundaries, and breath group boundaries, respectively.
The severity of the coronavirus pandemic calls for effective administrative decisions. In India, over four lakh people have succumbed to COVID-19, with more than three crore confirmed cases and counting. The threat of a plausible third wave continues to haunt millions. Amid these ever-changing dynamics of the virus, predictive modeling methods can serve as an integral tool. The pandemic has further triggered unprecedented social media usage. This paper proposes an approach that leverages social media, specifically Twitter, to forecast upcoming scenarios related to COVID-19 cases. In this study, we seek to understand how trends in COVID-19-related tweets can indicate a rise in cases. Such prospective analysis can help administrators with timely resource allocation to reduce the severity of the damage. Using word embeddings to capture the semantic meaning of tweets, we identify Significant Dimensions (SDs). Our approach forecasts the rise in cases with lead times of 15 and 30 days, with R2 scores of 0.80 and 0.62, respectively. Finally, we explain the thematic utility of the SDs.
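To illustrate the kind of lead-time forecast described here, the sketch below regresses case counts some days ahead on daily averaged tweet-embedding vectors and reports R2 on a held-out tail. The pipeline, data shapes, and synthetic example are assumptions for illustration and do not reproduce the paper's method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def forecast_from_tweet_embeddings(daily_embeddings, daily_cases, lead_days=15):
    """Regress the case count `lead_days` ahead on each day's mean tweet-embedding vector."""
    X = daily_embeddings[:-lead_days]   # features: one embedding vector per day
    y = daily_cases[lead_days:]         # target: cases lead_days later
    split = int(0.8 * len(X))           # chronological train/test split
    model = LinearRegression().fit(X[:split], y[:split])
    return r2_score(y[split:], model.predict(X[split:]))

# Synthetic example: 200 days of 50-dimensional daily averaged tweet embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 50))
cases = emb[:, :5].sum(axis=1) * 100 + rng.normal(scale=10, size=200)
print(forecast_from_tweet_embeddings(emb, cases, lead_days=15))
```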